Improved wavefield reconstruction from randomized sampling via weighted one-norm minimization

Authors

  • Hassan Mansour
  • Felix J. Herrmann
  • Özgür Yılmaz
Abstract

Missing-trace interpolation aims to reconstruct regularly sampled wavefields from periodically sampled data with gaps caused by physical constraints. While transform-domain sparsity promotion has proven to be an effective tool for solving this recovery problem, recent developments in randomized acquisition, in marine settings via randomized coil sampling or randomized streamers with deliberate feathering, and on land via randomization of the source and/or receiver positions, expose vulnerabilities in current recovery techniques, which, aside from the selection of the proper transform domain, make no use of a priori information on the transform-domain coefficients. To overcome these vulnerabilities in large-scale recovery problems, we propose recovery by weighted one-norm minimization, which exploits correlations between the locations of significant coefficients of different partitions of the acquired data, e.g., shot records, common-offset gathers, or frequency slices. We use these correlations to define a sequence of 2D curvelet-based recovery problems that exploit the 3D continuity exhibited by seismic wavefields without relying on the highly redundant 3D curvelet transform. To illustrate the performance of our weighted algorithm, we compare recoveries under different partitioning scenarios for a seismic line from the Gulf of Suez. These examples demonstrate that our method is superior to standard ℓ1 minimization in terms of both reconstruction quality and computational memory requirements.

INTRODUCTION

The problem of interpolating irregularly sampled and missing seismic data to a regular periodic grid arises frequently in 2D and 3D seismic settings. Since most multi-trace processing algorithms do not handle irregularly sampled (and aliased) data, interpolation to a regular grid is necessary in order to process and subsequently interpret the data. Irregular sampling often leads to image artifacts (referred to as acquisition footprint) due to uneven illumination of the subsurface, as well as poor repeatability between time-lapse images. To overcome these problems, the acquired dataset is regularized and/or interpolated (Hennenfent et al., 2010). Regularization, or bin centering, simply relocates traces from their irregular recording locations to locations on a regular grid; the traces may be altered depending on the method used, and some traces may be discarded. Interpolation, on the other hand, generates synthesized traces that are interleaved with the recorded traces in order to increase the spatial sampling rate. By combining regularization with interpolation, it is possible to significantly improve the quality of pre-stack time migration (Symes, 2007), 3D surface-related multiple elimination (Berkhout and Verschuur, 1997; Verschuur and Berkhout, 1997), wave-equation pre-stack depth migration (Claerbout, 1971), and time-lapse imaging, where interpolation and regularization homogenize the sampling between all vintages of the 4D dataset. In this paper, we are particularly interested in the interpolation aspect of the recovery from randomly sampled seismic data.
Recently, it has been shown both in simulation (Herrmann and Hennenfent, 2008; Hennenfent and Herrmann, 2006) and in practice (Mosher et al., 2012) that randomizing the acquisition of seismic data, by randomly spacing the receivers as an acquisition design guideline, produces better interpolated traces than regularly spaced receivers. The recovery from randomly located receivers has been addressed by several works in the seismic exploration literature. For instance, Claerbout (1992) and Spitz (1991) formulate the interpolation problem as a least-squares optimization problem in which the missing traces are interpolated using a prediction filter estimated in the frequency domain. Alternatively, transform-based methods have dominated the literature in recent years, utilizing particular transforms, e.g., the (non)uniform discrete Fourier transform (see, e.g., Sacchi and Ulrych, 1996; Liu and Sacchi, 2004; Duijndam et al., 1999; Zwartjes, 2005), the Radon transform (see, e.g., Thorson and Claerbout, 1985; Hampson, 1986), and the curvelet transform (see, e.g., Hennenfent and Herrmann, 2005, 2006; Herrmann and Hennenfent, 2008; Hennenfent et al., 2010; Naghizadeh and Sacchi, 2010; Neelamani et al., 2010).

Recently, compressed sensing (Donoho, 2006; Candès and Tao, 2006) has emerged as a framework for acquiring incomplete random linear measurements of a signal and then reconstructing it by utilizing the prior knowledge that the signal is sparse or compressible in some transform domain. In seismic exploration, data consist of wavefronts that exhibit structure in multiple dimensions. In the curvelet transform domain, it is possible to capture this structure with a small number of significant transform coefficients, resulting in a sparse representation of the data. Several works in the literature have formulated the seismic data interpolation problem as an instance of recovery from compressive samples, and this approach has resulted in considerable improvement in reconstruction quality (see, e.g., Hennenfent and Herrmann, 2005, 2006; Herrmann et al., 2012). The interpolation problem becomes that of finding the curvelet synthesis coefficients with the smallest ℓ1 norm that best fit the randomly subsampled data in the physical domain.

Fully utilizing transform-domain sparsity sometimes requires the use of redundant transforms, i.e., transforms that can result in a significant expansion in the dimensionality of the model space. For example, due to the redundancy of the directional curvelet transform, applying the 3D curvelet transform to a seismic line induces a curvelet-domain representation with a 24-fold expansion in the dimension of the seismic line. A less optimal but memory-saving approach applies the 2D curvelet transform to two-dimensional partitions (or slices) of a seismic line. Consequently, the data are interpolated by solving several ℓ1 minimization problems that capture the structure in every 2D partition of the seismic line. This approach can be parallelized, since the recovery of each 2D partition is independent of the remaining partitions. However, recovering 2D partitions independently does not utilize the wavefront continuity that exists across partitions, because the ℓ1 minimization problem does not incorporate additional prior information related to the structure of the signal.
In this paper, we propose to use weighted ℓ1 minimization to improve the performance of transform-based data interpolation when information related to the locations of the non-zero entries (also called the support) of the signal is available. The continuity of wavefronts across adjacent partitions of a seismic line manifests itself as a high correlation in the support of the curvelet coefficients of the partitions. Moreover, by virtue of the transient character of seismic source functions, this correlation is also present among the supports of different frequency slices. Our approach is motivated by the results of Friedlander et al. (2012), who proved that, given a support estimate that is highly correlated with the true support of the signal, the solution of the weighted ℓ1 minimization problem has better recovery performance than that of standard ℓ1 minimization.

The idea of using weights to improve the recovery of regularized inverse problems has been explored previously in the literature. Liu and Sacchi (2004) proposed minimizing a weighted ℓ2 norm in the wavenumber domain that constrains the solution to be spatially bandlimited and imposes a prior spectral shape. Our approach differs from Liu and Sacchi (2004) in that we make no assumption on the bandlimitedness of the signal. Instead, we utilize the sparsity and correlated structure of seismic data partitions in the curvelet domain and solve a sequence of weighted ℓ1 minimization problems that is well suited to such structure. We present numerical simulations conducted on a subsampled seismic line from the Gulf of Suez demonstrating that weighted ℓ1 minimization significantly outperforms standard ℓ1 minimization, both in terms of the recovered signal-to-noise ratio (SNR) and upon visual inspection of the quality of reconstructed shot gathers.

Outline

We start by formulating the seismic data interpolation problem as a sparse recovery problem and then present an overview of the recovery guarantees of ℓ1 minimization and weighted ℓ1 minimization. Next, we describe how the interpolation of a seismic line can be achieved by solving a sequence of weighted ℓ1 minimization problems, and we discuss different ways of partitioning seismic lines to improve the recovery. Finally, we present the results of numerical simulations that illustrate the performance of weighted ℓ1 minimization in interpolating real seismic data.

THEORY

Seismic data interpolation by sparse recovery

Seismic data interpolation is an underdetermined inverse problem, since a high-dimensional signal model is recovered from a smaller number of measurements. The signal model is assumed to be sparse (or nearly sparse), since the forward model is taken to be the inverse of the redundant curvelet transform. Therefore, the interpolation approach minimizes a data misfit between the measurements and the forward model, in addition to a sparse regularization term, such as the ℓ1 norm, that captures the sparsity of the signal model (Hennenfent and Herrmann, 2006).

Consider a seismic line with Ns sources, Nr receivers, and Nt time samples. We assume that all sources see the same receivers, a scenario which is becoming more feasible with the advent of receivers that are deployed via autonomous underwater nodes and wireless land-based geophones (Howie et al., 2008; Savazzi and Spagnolini, 2008).
The unknown fully sampled seismic line can be reshaped into an N-dimensional vector f, where N = NsNrNt. It is well known that seismic data admit sparse representations by curvelets, which capture "wavefront sets" efficiently (see, e.g., Smith, 1998; Candès and Demanet, 2005; Candès et al., 2006a; Herrmann et al., 2008). Therefore, we wish to recover a sparse approximation f̃ of the discretized wavefield f from measurements b = RMf, where RM is a sampling operator composed of the product of a restriction matrix R with a measurement basis matrix M. The measurement matrix M represents the basis in which the measurements are taken and corresponds to the Dirac (identity) basis in the missing-trace interpolation scenario.

Let S be a sparsifying operator that characterizes the transform domain of f, such that S ∈ C^(P×N) with P ≥ N. In the case of the redundant curvelet transform (Candès et al., 2006a), S is a tight frame with P > N and S^H S = I, and the transform-domain representation x of f in S is not unique. The curvelet transform is highly redundant: when all angles and the finest scales are incorporated, it results in a frame expansion of P/N ≈ 8 in the 2D case and P/N ≈ 24 in the 3D case. This redundancy can easily become a computational impediment when scaling seismic surveys to 3D.

Let A := RMS^H. The measurements b can then be written as b = Ax, where x is the S-transform of f. We obtain the sparse approximation f̃ of f by first finding the vector x̃ that solves the sparse recovery problem with the underdetermined linear constraints Ax̃ = b, and then computing f̃ = S^H x̃, where the superscript H denotes the Hermitian transpose. Next, we formulate the corresponding sparse recovery problem that recovers the estimate x̃ of the curvelet synthesis coefficients.
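To make the roles of R, M, and S concrete, the following sketch assembles a toy measurement operator for a single 2D slice. It substitutes an orthonormal 2D FFT for the curvelet tight frame (a real implementation would use a curvelet toolbox such as CurveLab); all dimensions and names are illustrative, not those of the paper's experiments.

```python
import numpy as np

# Toy dimensions for a single 2D partition (receivers x time samples).
nr, nt = 64, 128
rng = np.random.default_rng(0)

# Restriction R: keep a random half of the receivers (missing traces).
kept = np.sort(rng.choice(nr, size=nr // 2, replace=False))

def S(f_vec):
    """Analysis operator S (stand-in): orthonormal 2D FFT, so S^H S = I."""
    return np.fft.fft2(f_vec.reshape(nr, nt), norm="ortho").ravel()

def SH(x_vec):
    """Synthesis operator S^H: inverse orthonormal 2D FFT."""
    return np.fft.ifft2(x_vec.reshape(nr, nt), norm="ortho").ravel()

def A_forward(x_vec):
    """A = R M S^H with M = I (Dirac basis): synthesize, then drop traces."""
    return SH(x_vec).reshape(nr, nt)[kept, :].ravel()

def A_adjoint(y_vec):
    """A^H: zero-fill the dropped traces, then analyze."""
    f = np.zeros((nr, nt), dtype=complex)
    f[kept, :] = y_vec.reshape(kept.size, nt)
    return S(f.ravel())

# Measurements b = R M f = A x, with x = S f the transform coefficients.
f = rng.standard_normal(nr * nt)
b = A_forward(S(f))
```

Because the stand-in transform is orthonormal and R is a restriction, the operator norm of A is 1, which simplifies step-size choices in the solvers sketched later.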
The sparse recovery problem

Let x be a P-dimensional vector in C^P and let b ∈ C^n, n ≪ P, represent the compressively sampled data of n measurements. The sparse recovery problem involves solving the underdetermined system of equations

    b = Ax,    (1)

where A ∈ C^(n×P) represents the measurement matrix. When x is k-sparse, i.e., when there are only k < n nonzero entries in x, sparsity-promoting recovery can be achieved by solving the ℓ0 minimization problem

    x̃ = argmin_{u∈C^P} ‖u‖0 subject to b = Au,    (2)

where x̃ represents the sparse approximation of x, and the ℓ0 norm ‖x‖0 counts the non-zero entries of x. Note that ℓ0 minimization is a combinatorial optimization problem, which quickly becomes computationally intractable as the size P of the problem increases. However, if it were possible to solve large-scale ℓ0 minimization problems in practice, and if every n×n submatrix of A were invertible, then x̃ would be equal to x whenever k < n/2 (Donoho and Elad, 2003).

Several greedy algorithms, such as orthogonal matching pursuit (OMP) (Pati et al., 1993; Tropp et al., 2007) and the antileakage Fourier transform (ALFT) (Xu et al., 2005), are also capable of finding the sparsest solution of such systems while remaining computationally feasible. However, these greedy algorithms can only recover sparse signals with far fewer non-zero entries than what ℓ0 minimization can handle. In fact, Tropp et al. (2007) have shown that, given a signal x with sparsity level k ≲ n/log(P/ρ) < n/2 for some constant ρ ∈ (0, 0.36), if A is a Gaussian random matrix, then OMP succeeds in recovering the sparse signal with probability 1 − 2ρ. Moreover, these greedy algorithms typically only allow a few nonzero components to enter the solution set per matrix-vector multiply. While this may not be a problem for Fourier-based methods on small cubes, it quickly becomes a computational problem for curvelets operating on large data volumes.

Alternatively, the basis pursuit (BP) convex optimization problem (Chen et al., 2001), shown below, has demonstrably better sparse recovery capabilities than greedy algorithms while remaining computationally tractable. The BP problem is guaranteed to recover an estimate x̃ for all signals x ∈ C^P with sparsity k ≲ n/log(P/n) < n/2 when A is a Gaussian matrix with independent identically distributed (i.i.d.) entries and normalized columns (Candès et al., 2006b; Donoho, 2006). The BP problem, also known as the ℓ1 minimization problem, is given by

    x̃ = argmin_{u∈C^P} ‖u‖1 subject to b = Au,    (3)

where x̃ represents an approximation of x, and the ℓ1 norm ‖u‖1 = Σ_{i=1}^{P} |u_i| is the sum of the absolute values of the elements of the vector u.

The BP problem typically finds a compressible∗ or (under some conditions) the sparsest solution that explains the data exactly. So, if the initial data is sparse or compressible, it is plausible that the solution of (BP) coincides with, or well approximates, the transform coefficients of the data. Indeed, this was mathematically proven to be the case if A obeys certain "incoherence" properties, formally the restricted isometry property (RIP) (Candès et al., 2006b), and if the original x is sufficiently sparse or compressible. Moreover, the recovery is stable and robust to measurement noise when A and x obey these properties. In fact, the BP problem is guaranteed to recover an approximation x̃ of x for a wider range of measurement matrices A and under less strict conditions than the greedy algorithms mentioned above. Moreover, the reconstruction error is bounded by

    ‖x̃ − x‖2 ≤ C0 ε + C1 ‖x_{T0^c}‖1 / √k,    (4)

where C0 and C1 are constants that depend on the condition number of submatrices of A of size n × k, ε is an upper bound on the measurement noise variance, T0 = supp(x|k) is the index set (also called the support) of the k largest-in-magnitude entries of x, and T0^c is the set complement of T0, i.e., the set of entries with small magnitudes. The one-norm ‖x_{T0^c}‖1 therefore represents the error in the best k-term approximation of x†. Note that when T0 is unique and x is k-sparse with all the non-zero entries lying in T0, then ‖x_{T0^c}‖1 = 0, and the bound in Equation (4) implies that recovery by the BP problem is exact, i.e., x̃ = x.

∗An N-dimensional signal is said to be compressible if it can be well approximated by its largest k ≪ N coefficients. For example, a signal x is considered compressible if the sorted magnitudes of its coefficients decay according to a power law, i.e., the ith largest coefficient has magnitude at most |x[i]| ≤ c i^(−p) with p > 1, where c is some scaling constant.

†For orthonormal bases, this best k-term approximation corresponds to selecting the k largest-in-magnitude transform coefficients. This approximation can be extended to redundant transforms by taking the k largest coefficients that solve the synthesis problem, i.e., that are obtained by solving a sparsifying program.

Remark: Successful sparse recovery from the compressive measurements b = Ax, where A = RMS^H, requires that the sampling operator RM be incoherent with the sparsifying transform S (Candès and Tao, 2006). This incoherence can be achieved when the samples are collected on a random subset of a regular grid, i.e., when the restriction operator R corresponds to a random restriction of the identity matrix. The random locations can either be drawn from a uniform distribution or, in applications where the signal to be recovered is bandlimited, as in seismic interpolation, it was demonstrated by Hennenfent and Herrmann (2008) that jittered sampling can improve the sparse recovery capabilities even further. See Herrmann et al. (2012) for further discussion.
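As a rough illustration of the two sampling strategies in the remark, the sketch below generates a uniformly random receiver mask and a jittered one; the jitter construction is a plausible reading of Hennenfent and Herrmann (2008), not their exact scheme.

```python
import numpy as np

def uniform_random_mask(n_traces, n_keep, rng):
    """Keep n_keep trace positions drawn uniformly at random."""
    return np.sort(rng.choice(n_traces, size=n_keep, replace=False))

def jittered_mask(n_traces, n_keep, rng):
    """Keep one trace per cell of a coarse grid, jittered within the cell.

    Unlike purely uniform sampling, this bounds the maximum gap between
    kept traces while preserving the randomness needed for incoherence."""
    cell = n_traces / n_keep
    offsets = rng.uniform(0.0, cell, size=n_keep)
    idx = np.floor(np.arange(n_keep) * cell + offsets).astype(int)
    return np.clip(idx, 0, n_traces - 1)

rng = np.random.default_rng(0)
kept_uniform = uniform_random_mask(128, 64, rng)
kept_jittered = jittered_mask(128, 64, rng)
```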
Interpolation with prior support information

In many situations, the data f exhibit continuity along one or more of their physical dimensions. In such situations, it is desirable to employ transforms that capitalize on this continuity in order to improve the sparsity of the coefficients x. However, it is often the case that the dimensionality of the problem is too large, and recovery is performed by first partitioning (or windowing) the seismic data volume into frequency slices or common offset-azimuth gathers, and then solving a sequence of individual subproblems. When f is windowed across the dimensions of continuity, this continuity manifests itself as a high correlation in the support sets of the transform coefficients of the windowed data, since the wavefields are repeated in adjacent partitions. Therefore, sequentially recovering the windowed sections provides additional support information that can be incorporated into the recovery algorithm. However, the ℓ1 minimization problem (3) does not incorporate prior information about the support of x.

One approach that utilizes prior information in the recovery algorithm is to replace ℓ1 minimization in (3) with weighted ℓ1 minimization

    min_u ‖u‖1,w subject to Au = b,    (5)

where w ∈ [0, 1]^P and ‖u‖1,w := Σ_i wi|ui| is the weighted ℓ1 norm. Consider the case where we are given a support estimate T̃ ⊂ {1, . . . , P} for x with a certain accuracy relative to the true support T0 of the k largest-in-magnitude entries of x. Friedlander et al. (2012) investigated the performance of weighted ℓ1 minimization, as described in Equation (5), where the weights are assigned such that

    wi = 1 for i ∈ T̃^c, and wi = γ for i ∈ T̃.

Fig. 1 illustrates the allocation of the weights given a support estimate set T̃. Small weights γ, 0 ≤ γ ≤ 1, are applied to the set T̃, while elements in the set T̃^c are assigned weights equal to 1. Such a selection of weights decreases the cost of having large nonzero entries on the set T̃, since the weighted one-norm of a vector u equals ‖u‖1,w = γ‖u_T̃‖1 + ‖u_{T̃^c}‖1. Consequently, solutions x̃ with small entries outside of the set T̃ have smaller weighted one-norms than solutions with larger entries outside of T̃. Therefore, using the weighted ℓ1 norm in problem (5) favours solutions that have small entries outside of T̃.
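In code, the weight assignment and the resulting weighted one-norm amount to a few lines of numpy; T_est plays the role of T̃, and the numbers are purely illustrative.

```python
import numpy as np

def make_weights(P, T_est, gamma):
    """Weight vector for problem (5): gamma on the support estimate, 1 elsewhere."""
    w = np.ones(P)
    w[T_est] = gamma
    return w

def weighted_one_norm(u, w):
    """||u||_{1,w} = sum_i w_i |u_i| = gamma*||u_T||_1 + ||u_{T^c}||_1."""
    return np.sum(w * np.abs(u))

# Example: P = 10 coefficients, support estimate {2, 3, 7}, gamma = 0.3.
w = make_weights(10, np.array([2, 3, 7]), gamma=0.3)
```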
The accuracy of the support estimate T̃ plays a critical role in determining how well the solution of Equation (5) approximates the true signal x, since the weighted ℓ1 problem favours solutions that are supported on T̃. Friedlander et al. (2012) proved that the reconstruction error of weighted ℓ1 minimization is bounded by

    ‖x̃ − x‖2 ≤ C0(α, γ) ε + C1(α, γ) ‖x_{T0^c}‖1 / √k,    (6)

where ε is an upper bound on the measurement noise variance, and α = |T̃ ∩ T0| / |T̃| is a parameter that quantifies the accuracy of the support estimate, defined as the proportion of entries in T̃ that also lie in T0. Here, |T| denotes the number of entries (cardinality) of a set T. Moreover, the constants C0(α, γ) and C1(α, γ) were shown to be smaller than the corresponding constants for standard ℓ1 minimization when α > 0.5 and γ < 1. This result indicates that, when the support estimate is accurate enough, weighted ℓ1 minimization outperforms standard ℓ1 minimization (BP) in terms of accuracy, stability, and robustness.

Figure 1: Illustration of a signal x and the weight vector w showing the true support set T0 and the support estimate T̃. Small weights γ are applied to the entries in the set T̃ in order to decrease the cost of assigning high-valued coefficients to this set during the recovery.

WEIGHTED ℓ1 MINIMIZATION FOR SEISMIC DATA INTERPOLATION

Seismic data are a discretization of the Green's function, a solution of the wave equation restricted to the surface where the receivers are located. As a result, seismic data organized in a seismic line exhibit continuity in the time/frequency dimension as well as across the offset/azimuth directions. In the curvelet domain, this continuity translates into a high correlation between the support sets of adjacent time/frequency slices and of adjacent common-offset/common-azimuth slices. The high correlation is visible in the size of the intersection between the support sets of adjacent slices. In this section, we set up the weighted ℓ1 minimization problem as a means to exploit this correlation in support and improve the performance of seismic data interpolation.

A general algorithm

Recall that our objective is to recover a high-dimensional seismic data volume f by interpolating between a smaller number of measurements b = RMf collected on an irregular grid defined by the restriction operator R. The measurements therefore represent irregular, in particular random, samples from the high-dimensional data volume. In order to cope with the large dimensionality of the problem, we window the data along some dimension and then sequentially recover the windowed partitions. Let b(j) be the subsampled measurements of the data f(j) in partition j. The corresponding compressed sensing matrix used for sparse recovery is given by A(j) = R(j)S^H, where R(j) is the subsampling operator restricted to the jth partition and S is the 2D curvelet transform‡. Here, we assume that M(j) is the identity matrix and remove it from the expression for A(j). The classical approach to sparse recovery finds a sparse approximation of each windowed partition by solving the ℓ1 minimization problem

    x̃(j) = argmin_u ‖u‖1 subject to A(j)u = b(j)

for each partition j, independent of the remaining partitions.
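Each such subproblem can be attacked with any ℓ1 solver. As a self-contained illustration, the sketch below uses iterative soft thresholding (ISTA) on the penalized form min_u ½‖Au − b‖² + λ‖u‖1, a common surrogate for the equality-constrained program when measurements are noisy; the callable-operator interface matches the earlier FFT toy sketch, and λ, step size, and iteration count are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Complex soft thresholding: shrink magnitudes by t, preserve phase."""
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def ista(A, AH, b, lam, n_iter=200, step=1.0):
    """ISTA for min_u 0.5*||A(u) - b||_2^2 + lam*||u||_1.

    A, AH : callables applying the operator and its adjoint.
    step  : must satisfy step <= 1/||A||^2; for the FFT toy operator ||A|| = 1.
    """
    x = AH(b)                          # warm start from the adjoint image
    for _ in range(n_iter):
        x = soft_threshold(x - step * AH(A(x) - b), step * lam)
    return x

# Per-partition recovery with the toy operator from the earlier sketch:
# x_tilde = ista(A_forward, A_adjoint, b, lam=0.01)
```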
Alternatively, we propose to use the support information of the previously recovered partition to assign weights on a support estimate T̃ for the adjacent partition, and then solve a weighted ℓ1 minimization problem to recover the subsampled partition. We present below a general algorithm that can be applied to arbitrary partitionings of the same subsampled data. Again, the only requirement for improved recovery over standard ℓ1 minimization is that adjacent partitions indexed by j have sufficiently correlated support sets in the transform domain. Details of the algorithm are presented in the following subsections.

Algorithm 1: Weighted ℓ1 recovery of seismic data.
1: Input: b(j) = A(j)x(j) for all j, where A(j) = R(j)S^H; choose γ and k
2: Output: x̃(j)
3: Initialize: j = 1, x̃(1) = argmin_u ‖u‖1 subject to A(1)u = b(1)
4: loop
5:   j = j + 1
6:   T̃ = supp(S S^H x̃(j−1) |k)
7:   Set wi = 1 for i ∈ T̃^c, and wi = γ for i ∈ T̃
8:   x̃(j) = argmin_u ‖u‖1,w subject to A(j)u = b(j)
9: end loop

Remark: Notice that in step 6 of Algorithm 1, the support estimate is chosen as the locations of the largest k analysis coefficients S S^H x̃(j−1) = S f̃(j−1) of the recovered partition, instead of the synthesis coefficients x̃(j−1). This is due to the fact that the curvelet transform S is highly redundant, causing adjacent partitions to have synthesis coefficients with nearby but non-overlapping supports. The analysis coefficients, on the other hand, smear the nonzero entries of the synthesis coefficients, allowing the synthesis coefficients of adjacent partitions to fall within the support of the analysis coefficients of previously recovered partitions. A similar approach was used by Saab et al. (2007) to define the weights in their curvelet-based Bayesian primary-multiple separation problem.

‡Although we restrict the partitions in this paper to 2D slices of the seismic line, our approach is general and can be trivially extended to higher-dimensional partitions.
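A compact sketch of Algorithm 1 under the same toy interfaces: the weighted subproblem is handled by rescaling the ISTA threshold per coefficient (one standard way to apply a weighted ℓ1 penalty), and soft_threshold is the helper from the previous sketch. The partition operators, k, and γ follow the notation of the pseudocode; everything else is illustrative.

```python
import numpy as np

def weighted_ista(A, AH, b, lam, w, n_iter=200, step=1.0):
    """ISTA for min_u 0.5*||A(u) - b||^2 + lam * sum_i w_i |u_i|:
    plain ISTA with the threshold scaled per coefficient by w_i."""
    x = AH(b)
    for _ in range(n_iter):
        x = soft_threshold(x - step * AH(A(x) - b), step * lam * w)
    return x

def recover_partitions(partitions, S, SH, k, gamma, lam=0.01):
    """Sequential weighted l1 recovery across partitions (Algorithm 1 sketch).

    partitions : list of (A, AH, b) triples, one per partition j.
    S, SH      : analysis / synthesis operators of the sparsifying frame.
    """
    x_prev, estimates = None, []
    for j, (A, AH, b) in enumerate(partitions):
        if j == 0:
            # Step 3: standard l1 recovery of the first partition.
            x = weighted_ista(A, AH, b, lam, w=1.0)
        else:
            # Step 6: support estimate = locations of the k largest
            # analysis coefficients S S^H x~(j-1) of the previous partition.
            analysis = S(SH(x_prev))
            T_est = np.argsort(np.abs(analysis))[-k:]
            # Step 7: weight gamma on T~, weight 1 on its complement.
            w = np.ones(analysis.size)
            w[T_est] = gamma
            # Step 8: weighted l1 recovery of partition j.
            x = weighted_ista(A, AH, b, lam, w)
        estimates.append(x)
        x_prev = x
    return estimates
```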
Partitioning of irregularly subsampled seismic lines

There are several ways in which seismic data can be partitioned while still exploiting the continuity of the waveforms between partitions. We present two partitioning scenarios in this section. In the first scenario, we propose to partition seismic lines in the frequency domain and recover a sequence of frequency slices by sparse recovery, scanning from the low frequencies to the high frequencies, since the low frequencies are not aliased. In the second scenario, we propose partitioning the data into offset slices (or azimuth slices) and recovering a sequence of common-offset gathers (common-azimuth gathers) by sparse recovery, scanning from the near offsets to the far offsets. The choice of the partitioning scheme can be important algorithmically. For example, the first scenario is beneficial when combined with wave-equation based inversion, whereas the second scenario is useful when combined with common-offset/azimuth migration.

Partitioning in the time/frequency domain

Consider the fully sampled seismic line illustrated in Fig. 2(a) by a time slice in the source-receiver domain. Random subsampling of the seismic line is simulated by applying the mask shown in Fig. 2(b) to the seismic line. This results in the subsampled data shown in Fig. 2(c), corresponding to measurements collected from irregularly spaced receivers whose locations constitute a randomly chosen subset of the complete regular grid.

Sparse recovery in this setup takes advantage of the sparsity of the curvelet synthesis coefficients of a frequency slice in the source-receiver domain. Therefore, we first compute the Fourier transform of the seismic line along the time axis by applying the Nt-point DFT to the time axis of every source-receiver coordinate. Let f(0) correspond to the lowest frequency slice of the resulting Fourier-transformed seismic line. A sparse approximation f̃(0) is obtained by solving the sparse recovery problem (3), which is set up by defining b(0) = R(0)f(0) = R(0)S^H x(0) as the subsampled lowest frequency slice and S as the 2D curvelet transform.

[Figure 2: Time slices in the source-receiver domain (axes: receiver number vs. source number): (a) the fully sampled seismic line, (b) the random subsampling mask, (c) the subsampled data.]
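Putting the pieces together for the frequency-domain scenario, a sketch of the scan from low to high frequencies might look as follows; make_slice_operator is a hypothetical builder for the per-slice operator A(j) = R(j)S^H, and weighted_ista is the helper sketched earlier.

```python
import numpy as np

def interpolate_by_frequency(traces, kept, S, SH, k, gamma, lam=0.01):
    """Frequency-domain partitioning (sketch): recover slices low -> high.

    traces : (n_src, n_kept_rec, n_time) subsampled data on kept receivers.
    kept   : indices of the acquired receiver positions.
    """
    # Nt-point DFT along the time axis; each output frequency is a 2D
    # source-receiver slice, and the lowest frequencies are unaliased.
    spectra = np.fft.rfft(traces, axis=-1)
    x_prev, coeffs = None, []
    for fidx in range(spectra.shape[-1]):
        b = spectra[:, :, fidx].ravel()           # subsampled slice b^(j)
        A, AH = make_slice_operator(kept, S, SH)  # hypothetical builder
        if x_prev is None:
            # Lowest frequency slice f^(0): plain l1 recovery.
            x = weighted_ista(A, AH, b, lam, w=1.0)
        else:
            # Support estimate from the previously recovered slice.
            T_est = np.argsort(np.abs(S(SH(x_prev))))[-k:]
            w = np.ones(x_prev.size)
            w[T_est] = gamma
            x = weighted_ista(A, AH, b, lam, w)
        coeffs.append(x)
        x_prev = x
    return coeffs
```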
